What is the mobile termination regime for asymmetric firms with a calling club effect?
The aim of our paper is to determine the efficiency of asymmetric regulation of Mobile Termination Rates (MTRs) in a market where firms are differentiated in size and offer commercial packages that include calling club effects. Major regulatory issues are tied to these analyses, since some European National Regulatory Authorities (NRAs) and the European Commission tend to question asymmetric regulation mechanisms. Based on a model designed to determine firm profits and consumer surplus, our main results are as follows. Asymmetric regulation of MTRs may contribute to an increase in welfare. While the impact on the firms is neutral (a simple reallocation of profits from the large to the small player), consumer surplus increases. The appropriate way to proceed is to decrease the large firm's MTRs rather than to increase the smaller firm's, which could produce negative side effects. From a dynamic point of view, appropriate asymmetric regulation may help balance market shares and thereby compensate for first-mover advantages.
Keywords: Economics of telecommunications; Regulation; Networks; Mobile Termination Rates
Additive Manufacturing of Magnetic Materials for Electric Motor and Generator Applications
This work details research into 3D Printing, also known as Additive Manufacturing (AM), of both impermanent and permanent magnets, and into enabling such AM magnets in electrical machine applications, primarily motors and generators. The AM processes for many types of magnets are described in detail, as are the material properties of the resulting AM magnets. The two main types of AM magnets discussed in detail are AM NdFeB and AM Silicon Steel. The implementation of AM NdFeB as rotor magnets, and of AM Silicon Steel as rotor and stator cores, is discussed in detail. The construction of a working electric motor made with AM magnets is described. Lastly, future research directions are discussed.
Statistical Analysis of Fixed Mini-Batch Gradient Descent Estimator
We study here a fixed mini-batch gradient descent (FMGD) algorithm to solve
optimization problems with massive datasets. In FMGD, the whole sample is split
into multiple non-overlapping partitions. Once the partitions are formed, they
are then fixed throughout the rest of the algorithm. For convenience, we refer
to the fixed partitions as fixed mini-batches. Then for each computation
iteration, the gradients are sequentially calculated on each fixed mini-batch.
Because each fixed mini-batch is typically much smaller than the whole
sample, its gradient can be computed cheaply. This greatly reduces the
computation cost of each iteration and makes FMGD computationally efficient
and practically more feasible. To demonstrate the theoretical properties of
FMGD, we start with a linear regression model with a constant learning rate. We
study its numerical convergence and statistical efficiency properties. We find
that sufficiently small learning rates are required for both
numerical convergence and statistical efficiency. Nevertheless, an extremely
small learning rate might lead to painfully slow numerical convergence. To
solve the problem, a diminishing learning rate scheduling strategy can be used.
This leads to the FMGD estimator with faster numerical convergence and better
statistical efficiency. Finally, the FMGD algorithms with random shuffling and
a general loss function are also studied.
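To make the procedure concrete, here is a minimal sketch of FMGD for linear regression with a diminishing learning rate, written in NumPy. The partition count, step-size schedule, and all function names are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def fmgd_linear_regression(X, y, n_batches=4, lr0=0.1, decay=0.01, epochs=200):
    """Fixed mini-batch gradient descent for least-squares linear regression.

    The sample is split ONCE into fixed, non-overlapping partitions; every
    epoch then sweeps the same partitions in order, with a step size that
    diminishes over iterations.
    """
    n, p = X.shape
    # Form the fixed mini-batches once; they never change afterwards.
    batches = np.array_split(np.random.permutation(n), n_batches)
    beta = np.zeros(p)
    step = 0
    for _ in range(epochs):
        for batch in batches:                   # same partitions every epoch
            Xb, yb = X[batch], y[batch]
            grad = Xb.T @ (Xb @ beta - yb) / len(batch)
            lr = lr0 / (1.0 + decay * step)     # diminishing learning rate
            beta -= lr * grad
            step += 1
    return beta
```

The key contrast with stochastic gradient descent is that the partitions are drawn once and reused, so each iteration touches a small, fixed subsample rather than a fresh random draw.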
Characterization and Application of Ubiquitin E3 Ligase RNF146 WWE Domain
Poly(ADP-ribosyl)ation (PARylation) is a reversible post-translational modification of cellular proteins. The dynamic process of PARylation is balanced between synthesis of poly(ADP-ribose) (PAR) on substrates by PAR polymerases (PARPs) and degradation of the polymer by PAR glycohydrolase. Here we report that overexpression of RNF146 and its PAR-binding WWE domain led to an increase in endogenous PAR levels in multiple cell lines. Moreover, we showed that the increase in PARylation was likely due to an increase in the steady-state level of certain PARylated proteins such as PARP5a (Tankyrase 1) and PARP5b (Tankyrase 2). At the single-cell level, we observed the formation of PAR/Tankyrase-rich puncta in HeLa cells upon overexpression of the RNF146 WWE domain. We also demonstrated that the PAR induction by the RNF146 WWE domain was tightly linked to its affinity for PAR: mutations in the RNF146 WWE domain that reduce PAR binding (Q155A, Y143A and R163A) also impaired the level of PAR/TNKS induction upon overexpression in cells. We illustrated that this characteristic property of the RNF146 WWE domain is not a common feature shared by every member of the mammalian WWE domain family. Lastly, we proposed and explored the idea of engineering the RNF146 WWE domain into a molecular tool for artificially augmenting PARylation levels in different subcellular compartments.
Histogram-Based Flash Channel Estimation
Current generation Flash devices experience significant read-channel
degradation from damage to the oxide layer during program and erase operations.
Information about the read-channel degradation drives advanced signal
processing methods in Flash to mitigate its effect. In this context, channel
estimation must be ongoing since channel degradation evolves over time and as a
function of the number of program/erase (P/E) cycles. This paper proposes a
framework for ongoing model-based channel estimation using limited channel
measurements (reads). This paper uses a channel model characterizing
degradation resulting from retention time and the amount of charge programmed
and erased. For channel histogram measurements, bin selection to achieve
approximately equal-probability bins yields a good approximation to the
original distribution using only ten bins (i.e. nine reads). With the channel
model and binning strategy in place, this paper explores candidate numerical
least squares algorithms and ultimately demonstrates the effectiveness of the
Levenberg-Marquardt algorithm, which provides both speed and accuracy.
Comment: 6 pages, 8 figures, Submitted to the IEEE International
Communications Conference (ICC) 201
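The equal-probability binning idea can be illustrated with a short sketch. Under the simplifying assumption that a reference sample from the assumed channel model is available, read thresholds are placed at its quantiles so that ten bins (nine thresholds, i.e. nine reads) each capture roughly equal probability; the function names here are illustrative, not from the paper.

```python
import numpy as np

def equal_probability_thresholds(reference_samples, n_bins=10):
    """Place read thresholds at quantiles of an assumed channel model.

    n_bins bins require n_bins - 1 thresholds (reads); quantile spacing
    makes each bin approximately equally probable under the model.
    """
    probs = np.arange(1, n_bins) / n_bins        # 0.1, 0.2, ..., 0.9
    return np.quantile(reference_samples, probs)  # nine thresholds

def bin_histogram(samples, thresholds):
    """Count measured samples falling into each threshold-defined bin."""
    idx = np.searchsorted(thresholds, samples)    # bin index per sample
    return np.bincount(idx, minlength=len(thresholds) + 1)
```

The resulting ten-bin histogram is the limited measurement that the model-based least-squares fit (e.g. Levenberg-Marquardt) would then be run against.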
The Emerging Trends of Multi-Label Learning
Exabytes of data are generated daily by humans, leading to the growing need
for new efforts in dealing with the grand challenges for multi-label learning
brought by big data. For example, extreme multi-label classification is an
active and rapidly growing research area that deals with classification tasks
with an extremely large number of classes or labels; utilizing massive data
with limited supervision to build a multi-label classification model becomes
valuable for practical applications. Beyond these, tremendous effort has
gone into harvesting the strong learning capability of deep learning to
better capture the label dependencies in multi-label learning, which is the key
for deep learning to address real-world classification tasks. However, it is
noted that there has been a lack of systematic studies that focus explicitly on
analyzing the emerging trends and new challenges of multi-label learning in the
era of big data. It is imperative to call for a comprehensive survey to fulfill
this mission and delineate future research directions and new applications.
Comment: Accepted to TPAMI 202
Safe and Robust Watermark Injection with a Single OoD Image
Training a high-performance deep neural network requires large amounts of
data and computational resources. Protecting the intellectual property (IP) and
commercial ownership of a deep model is challenging yet increasingly crucial. A
major stream of watermarking strategies implants verifiable backdoor triggers
by poisoning training samples, but these are often unrealistic due to data
privacy and safety concerns and are vulnerable to minor model changes such as
fine-tuning. To overcome these challenges, we propose a safe and robust
backdoor-based watermark injection technique that leverages the diverse
knowledge from a single out-of-distribution (OoD) image, which serves as a
secret key for IP verification. Its independence from the training data makes it
agnostic to third-party promises of IP security. We induce robustness via
random perturbation of model parameters during watermark injection to defend
against common watermark removal attacks, including fine-tuning, pruning, and
model extraction. Our experimental results demonstrate that the proposed
watermarking approach is not only time- and sample-efficient without training
data, but also robust against the watermark-removal attacks above.
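As a rough illustration of the perturbation idea (not the authors' exact procedure), one watermark-injection update might jitter the parameters with Gaussian noise before applying the watermark gradient, so the implanted trigger tolerates small post-hoc changes such as fine-tuning; all names and hyperparameters below are hypothetical.

```python
import numpy as np

def perturbed_watermark_step(params, grads, lr=0.01, noise_std=1e-3, rng=None):
    """One watermark-injection update with random parameter perturbation.

    Each parameter tensor is jittered with Gaussian noise before the
    watermark gradient step, encouraging the watermark to survive nearby
    parameter settings rather than one exact point.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = {}
    for name, value in params.items():
        jittered = value + rng.normal(0.0, noise_std, value.shape)
        out[name] = jittered - lr * grads[name]  # step from the jittered point
    return out
```

In a real setting the gradients would come from the OoD-image trigger set; here the dictionary-of-arrays interface merely keeps the sketch framework-agnostic.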
InDL: A New Dataset and Benchmark for In-Diagram Logic Interpretation Based on Visual Illusions
This paper introduces a novel approach to evaluating deep learning models'
capacity for in-diagram logic interpretation. Leveraging the intriguing realm
of visual illusions, we establish a unique dataset, InDL, designed to
rigorously test and benchmark these models. Deep learning has witnessed
remarkable progress in domains such as computer vision and natural language
processing. However, models often stumble in tasks requiring logical reasoning
due to their inherent 'black box' characteristics, which obscure the
decision-making process. Our work presents a new lens to understand these
models better by focusing on their handling of visual illusions -- a complex
interplay of perception and logic. We utilize six classic geometric optical
illusions to create a comparative framework between human and machine visual
perception. This methodology offers a quantifiable measure to rank models,
elucidating potential weaknesses and providing actionable insights for model
improvements. Our experimental results affirm the efficacy of our benchmarking
strategy, demonstrating its ability to effectively rank models based on their
logic interpretation ability. As part of our commitment to reproducible
research, the source code and datasets will be made publicly available here:
https://github.com/rabbit-magic-wh/InDL
Revisiting the Knowledge Injection Frameworks
In recent years, large language models (LLMs), such as GPTs, have had a
great impact worldwide. However, how to adapt these LLMs to better suit the
vertical domain-specific tasks by utilizing external knowledge remains not
completely solved. Indeed, a few works have emerged along this line, most of
which rely on an alignment heuristic built to inject the corresponding
knowledge tuple into the associated text sample.
However, despite the promise, we identify a pivotal problem that is ubiquitous
in this line of work. Simply put, we find that injecting an unaligned (i.e.,
random) knowledge tuple into the LLMs achieves comparable (and sometimes better)
results than injecting the aligned knowledge. We therefore thoroughly
investigate this frustrating finding on a variety of related prior work
and further provide a chain of potential interpretations for the phenomenon.
Based on all that, we offer a simple remedial technique. Briefly, the core of
this technique is an emphasis on pruning and purifying the external
knowledge base to be injected into LLMs. Finally,
we show that by integrating this technique into most (if not all) knowledge
injection frameworks and recent LLMs, it manages to overcome the aforementioned
sanity problem and further pushes the boundary of the performance of the
domain-adaptive LLMs.
Comment: 9 pages, 6 figures, accepted by EMNLP 2023 Mai
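The alignment heuristic and its random control can be sketched in a few lines; the tuple format, key scheme, and function name below are hypothetical, intended only to show the aligned vs. unaligned comparison the abstract describes.

```python
import random

def inject_tuples(samples, kb, aligned=True, seed=0):
    """Prepend a (head, relation, tail) knowledge tuple to each text sample.

    aligned=True uses the tuple keyed to the sample (the alignment
    heuristic); aligned=False pairs each sample with a random tuple, the
    control condition reported to perform comparably.
    """
    rng = random.Random(seed)
    keys = sorted(kb)
    out = []
    for key, text in samples:
        head, rel, tail = kb[key] if aligned else kb[rng.choice(keys)]
        out.append(f"[{head} | {rel} | {tail}] {text}")
    return out
```

Comparing a model fine-tuned on the aligned output against one fine-tuned on the unaligned output is exactly the sanity check the abstract argues these frameworks fail.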